Unlike classical linear models, nonlinear generative models have been addressed only sparsely in the statistical learning literature. This work aims to bring attention to these models and their secrecy potential. To this end, we invoke the replica method to derive the asymptotic normalized cross entropy in an inverse probability problem whose generative model is described by a Gaussian random field with a generic covariance function. Our derivation further demonstrates the asymptotic statistical decoupling of the Bayesian estimator and specifies the decoupled setting for a given nonlinear model. The replica solution reveals that strictly nonlinear models establish an all-or-nothing phase transition: there exists a critical load at which optimal Bayesian inference changes from perfect learning to uncorrelated learning. Based on this finding, we design a new secure coding scheme that achieves the secrecy capacity of the wiretap channel. This interesting result implies that strictly nonlinear generative models are perfectly secure without any secure coding. We justify this latter statement through the analysis of an illustrative model for perfectly secure and reliable inference.
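As a schematic way to state the all-or-nothing transition (the notation and the normalization of the error are our assumptions; the abstract fixes neither), write the asymptotic normalized minimum mean-squared error of the Bayes-optimal estimator as a function of the load $\alpha$:

```latex
D(\alpha) \;=\;
\begin{cases}
  0, & \text{perfect learning (one side of the critical load } \alpha^{\star}\text{)},\\
  1, & \text{uncorrelated learning (the other side)},
\end{cases}
```

with no stable intermediate regime: the normalized error jumps discontinuously at $\alpha^{\star}$.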
Modeling perception sensors is key for simulation-based testing of automated driving functions. Beyond weather conditions themselves, sensors are also subject to object-dependent environmental influences such as tire spray caused by vehicles moving on wet pavement. In this work, a novel modeling approach for spray in lidar data is introduced. The model conforms to the Open Simulation Interface (OSI) standard and is based on the formation of detection clusters within a spray plume. The detections are rendered with a simple custom ray casting algorithm, without the need for a fluid dynamics simulation or physics engine. The model is subsequently used to generate training data for object detection algorithms. It is shown that the model significantly improves detection in real-world spray scenarios. Furthermore, a systematic real-world data set is recorded and published for analysis, model calibration, and validation of spray effects in active perception sensors. Experiments are conducted on a test track by driving over artificially watered pavement with varying vehicle speeds, vehicle types, and levels of pavement wetness. All models and data of this work are available open source.
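As an illustration of the rendering idea (not the paper's implementation; the cluster placement, sizes, and single scan line below are assumptions of this sketch), a spray plume can be approximated by spherical detection clusters and intersected with lidar beams by straightforward ray casting:

```python
# Minimal sketch: cast lidar beams against spherical "detection clusters"
# sampled inside a spray plume behind a vehicle. No physics engine needed.
import numpy as np

rng = np.random.default_rng(0)
centers = rng.normal(loc=[8.0, 0.0, 0.5], scale=[1.5, 0.5, 0.3], size=(5, 3))
radii = rng.uniform(0.1, 0.4, size=5)

def ray_sphere_hit(origin, direction, center, radius):
    """Smallest positive t with |origin + t*direction - center| = radius, or None."""
    oc = origin - center
    b = np.dot(oc, direction)
    c = np.dot(oc, oc) - radius ** 2
    disc = b * b - c  # direction assumed to be unit length
    if disc < 0.0:
        return None
    t = -b - np.sqrt(disc)
    return t if t > 0.0 else None

origin = np.zeros(3)
detections = []
for azimuth in np.deg2rad(np.arange(-10.0, 10.0, 0.2)):  # one horizontal scan line
    d = np.array([np.cos(azimuth), np.sin(azimuth), 0.0])
    hits = [t for ctr, r in zip(centers, radii)
            if (t := ray_sphere_hit(origin, d, ctr, r)) is not None]
    if hits:
        detections.append(origin + min(hits) * d)  # keep first return per beam
print(f"{len(detections)} spray detections")
```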
Recently, RNN-Transducers have achieved remarkable results on various automatic speech recognition tasks. However, lattice-free sequence discriminative training methods, which achieve superior performance in hybrid models, are rarely investigated in RNN-Transducers. In this work, we propose three lattice-free training objectives, namely lattice-free maximum mutual information, lattice-free segment-level minimum Bayes risk, and lattice-free minimum Bayes risk, which are applied to the final posterior output of the phoneme-based neural transducer with limited context dependency. Compared to criteria using N-best lists, lattice-free methods eliminate the decoding step for hypothesis generation during training, which leads to more efficient training. Experimental results show that lattice-free methods gain up to 6.5% relative improvement in word error rate compared to a sequence-level cross-entropy trained model. Compared to N-best-list based minimum Bayes risk objectives, lattice-free methods gain a 40%-70% relative training time speedup with a small degradation in performance.
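To illustrate why lattice-free training avoids decoding, here is a minimal MMI-style sketch in PyTorch; it scores a fixed reference alignment in the numerator and sums over all label sequences under an assumed bigram phone LM in the denominator via the forward algorithm. The paper's actual transducer objectives are more involved; this is only the structural idea.

```python
# Hedged sketch of a lattice-free MMI-style objective (not the paper's recipe):
# numerator = score of a given frame-level reference alignment,
# denominator = forward-algorithm sum over ALL label sequences, so no
# N-best decoding is needed during training.
import torch

def lfmmi_loss(logits, alignment, log_trans, log_prior):
    # logits: (T, V) unnormalized acoustic scores; alignment: (T,) reference labels
    T, V = logits.shape
    num = logits[torch.arange(T), alignment].sum()       # reference score
    alpha = log_prior + logits[0]                        # forward recursion start
    for t in range(1, T):
        alpha = torch.logsumexp(alpha[:, None] + log_trans, dim=0) + logits[t]
    den = torch.logsumexp(alpha, dim=0)                  # sum over all sequences
    return den - num                                     # -log P_MMI(reference)

T, V = 30, 40
logits = torch.randn(T, V, requires_grad=True)
alignment = torch.randint(0, V, (T,))
log_trans = torch.log_softmax(torch.randn(V, V), dim=1)  # assumed bigram phone LM
log_prior = torch.log_softmax(torch.randn(V), dim=0)
lfmmi_loss(logits, alignment, log_trans, log_prior).backward()
```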
ASR can be improved by multi-task learning (MTL) with domain enhancing or domain adversarial training, two opposite objectives that aim to increase or decrease domain variance towards domain-aware or domain-agnostic ASR, respectively. In this work, we study how to best apply these two opposite objectives with speaker labels to improve conformer-based ASR. We also propose a novel adaptive gradient reversal layer for stable and effective adversarial training without tuning effort. Detailed analysis and experimental verification are conducted to show the optimal positions in the ASR neural network (NN) to apply speaker enhancing and adversarial training. We also explore their combination for further improvement, achieving the same performance as i-vectors plus adversarial training. Our best speaker-based MTL achieves 7% relative improvement on the Switchboard Hub5'00 set. We also investigate the effect of such speaker-based MTL with respect to cleaner datasets and weaker ASR NNs.
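A gradient reversal layer itself is compact; the sketch below shows the standard construction in PyTorch, with a scale factor in the backward pass standing in for the adaptive rule (the paper's exact adaptation scheme is not reproduced here):

```python
# Gradient reversal layer: identity in the forward pass; in the backward pass
# the gradient is flipped and rescaled, so the speaker classifier is trained
# normally while the shared encoder is trained adversarially against it.
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, scale):
        ctx.scale = scale
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.scale * grad_output, None  # reversed, scaled gradient

def grad_reverse(x, scale=1.0):
    return GradReverse.apply(x, scale)

# Usage: encoder features feed both the ASR head and, through the reversal
# layer, an adversarial speaker classifier (dimensions here are assumptions).
feat = torch.randn(8, 256, requires_grad=True)
spk_logits = torch.nn.Linear(256, 10)(grad_reverse(feat, scale=0.5))
```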
The pre-training of masked language models (MLMs) consumes massive computation to achieve good results on downstream NLP tasks, resulting in a large carbon footprint. In the vanilla MLM, the virtual tokens, [MASK]s, act as placeholders and gather the contextualized information from unmasked tokens to restore the corrupted information. It raises the question of whether we can append [MASK]s at a later layer, to reduce the sequence length for earlier layers and make the pre-training more efficient. We show: (1) [MASK]s can indeed be appended at a later layer, being disentangled from the word embedding; (2) The gathering of contextualized information from unmasked tokens can be conducted with a few layers. By further increasing the masking rate from 15% to 50%, we can pre-train RoBERTa-base and RoBERTa-large from scratch with only 78% and 68% of the original computational budget without any degradation on the GLUE benchmark. When pre-training with the original budget, our method outperforms RoBERTa for 6 out of 8 GLUE tasks, on average by 0.4%.
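A schematic forward pass for this idea might look as follows (layer counts, dimensions, and the append-at-the-end ordering are our assumptions; the positions of the appended [MASK]s are carried by their positional embeddings, and tok_emb is assumed to already include positional information for the unmasked tokens):

```python
# Sketch: run the first k layers on unmasked tokens only (shorter sequence),
# then append [MASK] vectors with positional embeddings and run the rest.
import torch
import torch.nn as nn

d, k, n_layers = 64, 4, 8
layers = nn.ModuleList(
    nn.TransformerEncoderLayer(d, nhead=4, batch_first=True) for _ in range(n_layers))
mask_emb = nn.Parameter(torch.zeros(d))
pos_emb = nn.Embedding(512, d)

def forward(tok_emb, masked_pos):
    # tok_emb: (B, L, d) token embeddings; masked_pos: (B, M) corrupted positions
    B, L, _ = tok_emb.shape
    keep = torch.ones(B, L, dtype=torch.bool)
    keep.scatter_(1, masked_pos, False)
    x = tok_emb[keep].view(B, -1, d)       # unmasked tokens only
    for layer in layers[:k]:
        x = layer(x)                        # cheap early layers on short sequence
    m = mask_emb + pos_emb(masked_pos)      # [MASK]s injected at layer k
    x = torch.cat([x, m], dim=1)
    for layer in layers[k:]:
        x = layer(x)                        # full sequence for the later layers
    return x

out = forward(torch.randn(2, 16, d), torch.tensor([[3, 7], [1, 9]]))
```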
Automatic speech recognition (ASR) has been established as a well-performing technique for many scenarios where large amounts of labeled data are available. Additionally, unsupervised representation learning has recently helped to tackle tasks with limited data. Following this, hardware limitations and applications give rise to the question of how to efficiently take advantage of large pretrained models and reduce their complexity for downstream tasks. In this work, we study a challenging low-resource conversational telephony speech corpus from the medical domain in Vietnamese and German. We show the benefits of using unsupervised techniques beyond simple fine-tuning of large pre-trained models, discuss how to adapt them to a practical telephony task including bandwidth transfer, and investigate different data conditions for pre-training and fine-tuning. Using pretraining techniques, we outperform the project baselines by 22% relative. Further gains of 29% can be achieved by refinements of architecture and training, and 6% by adding 0.8 h of in-domain adaptation data.
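A bandwidth-transfer step of the kind mentioned could, for instance, narrow wideband audio down to telephone bandwidth before fine-tuning; the snippet below is an assumed sketch using torchaudio (the file path is a placeholder), not the project's actual pipeline:

```python
# Illustrative bandwidth transfer: downsample 16 kHz wideband audio to the
# 8 kHz telephone band, optionally resampling back up so the input still
# matches a frontend pretrained on 16 kHz data.
import torchaudio

wav, sr = torchaudio.load("utt.wav")  # hypothetical wideband utterance
wav8k = torchaudio.functional.resample(wav, orig_freq=sr, new_freq=8000)
wav16k_nb = torchaudio.functional.resample(wav8k, orig_freq=8000, new_freq=16000)
```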
Recently, the solution of time-dependent differential equations with neural networks has attracted a lot of attention. The central idea is to learn the laws that govern the evolution of the solution from data, which might be polluted with random noise. However, in contrast to other machine learning applications, a lot is usually known about the system at hand: for many dynamical systems, physical quantities such as energy or (angular) momentum are exactly conserved. Consequently, a neural network has to learn these conservation laws from data, and they will be satisfied only approximately due to finite training time and random noise. In this paper, we propose an alternative approach that uses Noether's theorem to inherently incorporate conservation laws into the architecture of the neural network. We demonstrate that this leads to better predictions for three model systems: the motion of a non-relativistic particle in a three-dimensional Newtonian gravitational potential, the motion of a massive relativistic particle in the Schwarzschild metric, and a system of two interacting particles in four dimensions.
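For reference, the mechanism invoked here is the classical statement of Noether's theorem for point mechanics (notation ours):

```latex
\text{If } L(q,\dot q) \text{ is invariant under } q \mapsto q + \epsilon\, K(q),
\text{ then}\quad
Q \;=\; \sum_i \frac{\partial L}{\partial \dot q_i}\, K_i(q)
\quad\text{satisfies}\quad \frac{\mathrm{d}Q}{\mathrm{d}t} = 0
\text{ along solutions of the Euler--Lagrange equations.}
```

Rotational invariance, for instance, yields conservation of angular momentum; building the symmetry into the network architecture then enforces the corresponding conserved quantity exactly rather than approximately.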
Machine learning, and specifically deep-learning methods, have outperformed human capabilities in many pattern recognition and data processing problems as well as in game playing, and now also play an increasingly important role in scientific discovery. A key application of machine learning in molecular science is to learn potential energy surfaces or force fields from ab-initio solutions of the electronic Schrödinger equation obtained with density functional theory, coupled cluster, or other quantum chemistry methods. Here we review a recent and complementary approach: using machine learning to aid the direct solution of quantum chemistry problems from first principles. Specifically, we focus on quantum Monte Carlo (QMC) methods that use neural-network ansatz functions to solve the electronic Schrödinger equation, in both first and second quantization, computing ground and excited states and generalizing over multiple nuclear configurations. Compared to existing quantum chemistry methods, these new deep QMC methods have the potential to generate highly accurate solutions of the Schrödinger equation at relatively modest computational cost.
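The core of such variational QMC methods can be sketched in a few lines for a toy one-dimensional system (illustrative only; real deep QMC codes handle many-electron antisymmetric wavefunctions and MCMC sampling): a small network parameterizes log|ψ(x)|, and the energy is estimated as the Monte Carlo average of the local energy E_L = (Hψ)/ψ.

```python
# Toy variational Monte Carlo sketch: neural ansatz for log|psi| of a 1D
# particle in a harmonic potential. Uses psi''/psi = (log psi)'' + ((log psi)')^2.
import torch
import torch.nn as nn

log_psi = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))

def local_energy(x):
    x = x.requires_grad_(True)
    lp = log_psi(x).sum()
    g, = torch.autograd.grad(lp, x, create_graph=True)     # (log psi)'
    lap, = torch.autograd.grad(g.sum(), x, create_graph=True)  # (log psi)''
    return -0.5 * (lap + g ** 2) + 0.5 * x ** 2            # kinetic + V(x) = x^2/2

# Samples should come from |psi|^2 via MCMC; plain Gaussians stand in here.
x = torch.randn(1024, 1)
energy = local_energy(x).mean()  # variational energy estimate
```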
Setting up robot environments to quickly test newly developed algorithms is still a difficult and time-consuming process. This presents a significant hurdle to researchers interested in performing real-world robotic experiments. RobotIO is a Python library designed to solve this problem. It focuses on providing common, simple, and well-structured Python interfaces for robots, grippers, cameras, and similar hardware, together with implementations of these interfaces for common devices. This makes code that uses robots portable across different robot setups. Architecturally, RobotIO is designed to be compatible with OpenAI Gym environments as well as with ROS; examples of both use cases are provided. The library comes with a number of useful tools, such as camera calibration scripts and episode recording functionality, which further support algorithm development.
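The interface style described might look roughly like the following sketch; all class and method names here are hypothetical placeholders, not RobotIO's actual API, so consult the library's documentation for the real names.

```python
# Hypothetical sketch of structured hardware interfaces of the kind the
# abstract describes. Names (Gripper, Robot, move_to, pick) are placeholders.
class Gripper:
    def open(self): ...
    def close(self): ...

class Robot:
    def __init__(self, gripper: Gripper):
        self.gripper = gripper
    def move_to(self, pose):
        """Move the end effector to a Cartesian pose."""
        ...

# User code written against such interfaces stays portable: swapping hardware
# means swapping the implementations behind Robot/Gripper, not this function.
def pick(robot: Robot, grasp_pose):
    robot.gripper.open()
    robot.move_to(grasp_pose)
    robot.gripper.close()
```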
Speaker adaptation is important for building robust automatic speech recognition (ASR) systems. In this work, we investigate various methods of speaker adaptive training (SAT) based on feature-space approaches for a conformer-based acoustic model (AM) on the Switchboard 300h dataset. We propose a method called weighted simple add, which adds weighted speaker information vectors to the input of the multi-head self-attention module of the conformer AM. Using this method for SAT, we achieve 3.5% and 4.5% relative improvement on the CallHome parts of Hub5'00 and Hub5'01, respectively. Moreover, we build on our previous work, in which we proposed a novel and competitive training recipe for a conformer-based hybrid AM. We extend and improve this recipe, achieving 11% relative improvement in terms of word error rate (WER) on the Switchboard 300h Hub5'00 dataset. We also make this recipe efficient by reducing the total number of parameters by 34%.
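The weighted simple add idea can be sketched as follows (the dimensions and the scalar-weight parameterization are our assumptions; the paper may weight the speaker vector differently):

```python
# Sketch: a speaker vector (e.g. an i-vector) is projected to the model
# dimension and added, with a trainable weight, to the input of the conformer
# block's multi-head self-attention module.
import torch
import torch.nn as nn

class WeightedSpeakerAdd(nn.Module):
    def __init__(self, spk_dim=100, model_dim=256):
        super().__init__()
        self.proj = nn.Linear(spk_dim, model_dim)
        self.weight = nn.Parameter(torch.tensor(0.1))  # learned scalar weight
        self.mhsa = nn.MultiheadAttention(model_dim, num_heads=4, batch_first=True)

    def forward(self, x, spk_vec):
        # x: (B, T, model_dim) frame features; spk_vec: (B, spk_dim)
        x = x + self.weight * self.proj(spk_vec).unsqueeze(1)  # broadcast over time
        out, _ = self.mhsa(x, x, x)
        return out

blk = WeightedSpeakerAdd()
y = blk(torch.randn(4, 50, 256), torch.randn(4, 100))
```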